In recent years, graph representation learning has achieved remarkable success, yet it still suffers from low-quality data. As a mature technology for improving data quality in computer vision, data augmentation has also attracted increasing attention in the graph domain. To promote the development of this emerging research direction, in this survey we comprehensively review and summarize existing graph data augmentation (GDAug) techniques. Specifically, we first summarize a variety of feasible taxonomies and then classify existing GDAug studies based on fine-grained graph elements. Furthermore, for each type of GDAug technique, we formalize its general definition, discuss the technical details, and give a schematic illustration. In addition, we summarize common performance metrics and specific design metrics for constructing a GDAug evaluation system. Finally, we summarize the applications of GDAug at both the data and model levels, as well as future directions.
Neural-symbolic computing aims at integrating robust neural learning and sound symbolic reasoning into a single framework, so as to leverage the complementary strengths of these two seemingly unrelated (maybe even contradictory) AI paradigms. The central challenge in neural-symbolic computing is to unify the formulation of neural learning and symbolic reasoning into a single framework with common semantics, that is, to seek a joint representation between a neural model and a logical theory that can support the basic grounding learned by the neural model while adhering to the semantics of the logical theory. In this paper, we propose differentiable fuzzy $\mathcal{ALC}$ (DF-$\mathcal{ALC}$) for this role, as a neural-symbolic representation language with the desired semantics. DF-$\mathcal{ALC}$ unifies the description logic $\mathcal{ALC}$ and neural models for symbol grounding; in particular, it infuses an $\mathcal{ALC}$ knowledge base into neural models through differentiable concept and role embeddings. We define a hierarchical loss to enforce the constraint that the grounding learned by neural models must be semantically consistent with $\mathcal{ALC}$ knowledge bases. We also find that capturing the semantics in grounding solely by maximizing satisfiability cannot revise the grounding rationally, so we further define a rule-based loss for DF-$\mathcal{ALC}$ that adapts to symbol grounding problems. Experimental results show that DF-$\mathcal{ALC}$ with the rule-based loss can improve the performance of image object detectors in an unsupervised learning setting, even in low-resource situations.
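To make the differentiable-semantics idea concrete, the sketch below relaxes the $\mathcal{ALC}$ constructors with a product t-norm over fuzzy groundings of concepts and roles. It is an illustrative assumption, not the paper's implementation: the tensor shapes, the Reichenbach implication, and the satisfiability loss are all generic choices.

```python
# A minimal sketch of differentiable fuzzy ALC semantics: concepts are
# grounded as fuzzy membership vectors over a finite set of individuals,
# roles as fuzzy adjacency matrices, and the ALC constructors are
# relaxed with the product t-norm. Shapes and losses are assumptions.
import torch

n = 5                                     # number of individuals (assumed)
C = torch.rand(n, requires_grad=True)     # fuzzy grounding of concept C
D = torch.rand(n, requires_grad=True)     # fuzzy grounding of concept D
R = torch.rand(n, n, requires_grad=True)  # fuzzy grounding of role R

def neg(c):                 # ¬C
    return 1.0 - c

def conj(c, d):             # C ⊓ D (product t-norm)
    return c * d

def disj(c, d):             # C ⊔ D (probabilistic sum)
    return c + d - c * d

def exists(r, c):           # ∃R.C: sup over successors, relaxed by max
    return (r * c.unsqueeze(0)).max(dim=1).values

def forall(r, c):           # ∀R.C ≡ ¬∃R.¬C
    return neg(exists(r, neg(c)))

def subsumption(lhs, rhs):  # fuzzy degree of lhs ⊑ rhs (Reichenbach)
    return (1.0 - lhs + lhs * rhs).mean()

# maximize satisfiability of the axiom C ⊑ ∃R.D by gradient descent
loss = 1.0 - subsumption(C, exists(R, D))
loss.backward()
print(loss.item())
```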
In neural architecture search (NAS) methods based on latent space optimization (LSO), a deep generative model is trained to embed discrete neural architectures into a continuous latent space, so that different optimization algorithms operating in the continuous space can be used to search for neural architectures. However, optimizing latent variables is challenging for gradient-based LSO, since the mapping from the latent space to the architecture performance is generally non-convex. To tackle this problem, this paper develops a convexity regularized latent space optimization (CR-LSO) method, which regularizes the learning process of the latent space in order to obtain a convex architecture performance mapping. Specifically, CR-LSO trains a graph variational autoencoder (G-VAE) to learn continuous representations of discrete architectures, while the learning process of the latent space is regularized by the guaranteed convexity of input convex neural networks (ICNNs). In this way, the G-VAE is forced to learn a convex mapping from the architecture representation to the architecture performance. Thereafter, CR-LSO approximates the performance mapping with the ICNN and leverages the estimated gradient to optimize neural architecture representations. Experimental results on three popular NAS benchmarks show that CR-LSO achieves competitive results in terms of both computational complexity and architecture performance.
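The convexity guarantee that CR-LSO relies on comes from the ICNN construction (Amos et al.), in which hidden-to-hidden weights are kept non-negative and activations are convex and non-decreasing. Below is a minimal sketch of such a network used as a performance surrogate; layer sizes, initialization, and training details are assumptions, not the paper's configuration.

```python
# A minimal sketch of an input convex neural network (ICNN).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """f(z) is convex in z: hidden-to-hidden weights are constrained
    to be non-negative and activations are convex and non-decreasing."""
    def __init__(self, in_dim, hidden=64, depth=2):
        super().__init__()
        self.first = nn.Linear(in_dim, hidden)
        self.Wz = nn.ParameterList(
            [nn.Parameter(torch.rand(hidden, hidden) * 0.1)
             for _ in range(depth)])
        self.Wx = nn.ModuleList(
            [nn.Linear(in_dim, hidden) for _ in range(depth)])
        self.out = nn.Parameter(torch.rand(1, hidden) * 0.1)

    def forward(self, x):
        z = F.softplus(self.first(x))
        for Wz, Wx in zip(self.Wz, self.Wx):
            # clamping keeps the hidden-to-hidden path non-negative,
            # which preserves convexity of the composition
            z = F.softplus(z @ Wz.clamp(min=0).t() + Wx(x))
        return z @ self.out.clamp(min=0).t()

f = ICNN(in_dim=16)
z = torch.randn(8, 16, requires_grad=True)
perf = f(z)                  # convex surrogate of architecture performance
perf.sum().backward()        # estimated gradient w.r.t. latent codes
print(z.grad.shape)          # torch.Size([8, 16])
```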
Brain network analysis for traumatic brain injury (TBI) patients is crucial for evaluating their level of consciousness and for prognosis, and it requires segmenting certain consciousness-related brain regions. However, it is difficult to construct a TBI segmentation model because manually annotated MR scans of TBI patients are hard to collect. Data augmentation techniques can be used to alleviate this data scarcity, but conventional augmentation strategies such as spatial and intensity transformations cannot mimic the deformations and lesions of traumatic brains, which limits the performance of subsequent segmentation tasks. To address these issues, we propose a novel medical image inpainting model named TBIGAN to synthesize TBI MR scans with paired brain label maps. The main strength of TBIGAN is that it can generate TBI images and their corresponding label maps simultaneously, which had not been achieved by previous inpainting methods for medical images. We first generate the inpainted image under the guidance of edge information in a coarse-to-fine manner, and then use the synthesized intensity image as the prior for label inpainting. Furthermore, we introduce a registration-based template augmentation pipeline to increase the diversity of the synthesized image pairs and enhance the data augmentation capability. Experimental results show that the proposed TBIGAN method can produce sufficiently many high-quality synthesized TBI images with valid label maps, which substantially improve both 2D and 3D traumatic brain segmentation performance compared with alternatives.
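The following sketch illustrates the two-stage idea described above in a hypothetical PyTorch form: an edge-guided coarse-to-fine generator produces the intensity image, which then conditions a second network that inpaints the label map. All architectures, channel counts, and names here are assumptions for illustration, not the authors' network.

```python
# A minimal sketch of TBIGAN's two-stage idea (illustrative assumptions).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class EdgeGuidedGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # stage 1: coarse image from masked image + mask + edge map
        self.coarse = nn.Sequential(conv_block(3, 32), conv_block(32, 1))
        # stage 2: refinement of the coarse result
        self.fine = nn.Sequential(conv_block(2, 32), conv_block(32, 1))

    def forward(self, masked_img, mask, edges):
        x = torch.cat([masked_img, mask, edges], dim=1)
        coarse = self.coarse(x)
        return self.fine(torch.cat([coarse, mask], dim=1))

class LabelInpainter(nn.Module):
    """Inpaints the label map conditioned on the synthesized image."""
    def __init__(self, n_labels=4):
        super().__init__()
        self.net = nn.Sequential(conv_block(1 + n_labels, 32),
                                 nn.Conv2d(32, n_labels, 3, padding=1))

    def forward(self, synth_img, masked_labels):
        return self.net(torch.cat([synth_img, masked_labels], dim=1))

g, l = EdgeGuidedGenerator(), LabelInpainter()
img, mask, edge = (torch.randn(1, 1, 64, 64) for _ in range(3))
labels = torch.randn(1, 4, 64, 64)      # masked one-hot label prior
synth = g(img, mask, edge)              # synthesized intensity image
logits = l(synth, labels)               # label map conditioned on it
print(synth.shape, logits.shape)
```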
In recent years, streaming technology has greatly promoted the development of livestreaming. Because livestream recordings are extremely long, extracting highlight segments is essential for efficient reproduction and redistribution. Although many methods have proven effective for highlight detection in other modalities, the challenges particular to livestream processing, such as extreme duration, large topic shifts, and abundant irrelevant information, severely hinder the adaptation and compatibility of those methods. In this paper, we formulate the new task of livestream highlight detection, discuss and analyze the difficulties listed above, and propose a new architecture, AntPivot, to solve this problem. Specifically, we first encode the raw data into multiple views and model their temporal relations to capture clues with a hierarchical attention mechanism. We then recast the detection of highlight clips as a search for an optimal decision sequence and use the fully integrated representations to predict the final results with a dynamic programming mechanism. Furthermore, we construct a fully annotated dataset, AntHighlight, to instantiate this task and evaluate the performance of our model. Extensive experiments demonstrate the effectiveness and efficiency of our proposed method.
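As an illustration of the decision-sequence view, the following Viterbi-style dynamic program (a generic sketch, not the paper's actual decoder) labels each chunk as highlight or background by trading per-chunk model scores against a penalty for switching states.

```python
# A minimal sketch of decoding highlights as an optimal decision
# sequence via dynamic programming (illustrative assumptions).
import numpy as np

def dp_decode(scores, switch_penalty=0.5):
    """scores: (T, 2) array, scores[t, s] = model score of state s
    (0 = background, 1 = highlight) at chunk t."""
    T = scores.shape[0]
    best = scores[0].copy()            # best cumulative score per state
    back = np.zeros((T, 2), dtype=int)
    for t in range(1, T):
        prev = best.copy()
        for s in (0, 1):
            stay = prev[s]
            move = prev[1 - s] - switch_penalty
            back[t, s] = s if stay >= move else 1 - s
            best[s] = max(stay, move) + scores[t, s]
    # backtrack the optimal decision sequence
    path = [int(best.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

scores = np.random.rand(10, 2)
print(dp_decode(scores))   # e.g. [0, 0, 1, 1, 1, 0, ...]
```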
Principal component analysis (PCA) is a popular dimension reduction technique for vector data. Factored PCA (FPCA) is a probabilistic extension of PCA for matrix data, which can substantially reduce the number of parameters in PCA while yielding satisfactory performance. However, FPCA is based on the Gaussian assumption and is thus susceptible to outliers. Although the multivariate $t$ distribution has a long history as a robust modeling tool for vector data, its application to matrix data is very limited. The main reason is that the dimension of vectorized matrix data is often very high, and the higher the dimension, the lower the breakdown point that measures robustness. To solve the robustness problem suffered by FPCA and make it applicable to matrix data, this paper proposes a robust extension of FPCA (RFPCA), built upon a $t$-type distribution called the matrix-variate $t$ distribution. Like the multivariate $t$ distribution, the matrix-variate $t$ distribution can adaptively down-weight outliers and yield robust estimates. We develop a fast EM-type algorithm for parameter estimation. Experiments on synthetic and real-world datasets show that RFPCA compares favorably with several related methods, and that RFPCA is a simple but powerful tool for matrix-valued outlier detection.
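The adaptive down-weighting can be seen in a bare-bones EM-type iteration for a matrix-variate $t$ model, sketched below. This is an illustrative simplification, not the paper's full RFPCA algorithm: the E-step assigns each matrix observation a weight that shrinks with its Mahalanobis distance, so outliers contribute little to the M-step updates of the mean and the row/column covariances.

```python
# A minimal EM-type sketch for a matrix-variate t model (assumptions).
import numpy as np

def matrix_t_em(X, nu=4.0, n_iter=20):
    """X: (n, p, q) stack of matrix observations."""
    n, p, q = X.shape
    M = X.mean(axis=0)
    U, V = np.eye(p), np.eye(q)          # row / column covariances
    for _ in range(n_iter):
        R = X - M
        Ui, Vi = np.linalg.inv(U), np.linalg.inv(V)
        # E-step: squared Mahalanobis distance and weight per sample;
        # delta_i = tr(U^{-1} R_i V^{-1} R_i^T)
        delta = np.einsum('nij,jk,nlk,li->n', R, Vi, R, Ui)
        w = (nu + p * q) / (nu + delta)
        # M-step: weighted updates (outliers receive small w)
        M = np.einsum('n,nij->ij', w, X) / w.sum()
        R = X - M
        U = np.einsum('n,nij,jk,nlk->il', w, R, Vi, R) / (n * q)
        V = np.einsum('n,nji,jk,nkl->il', w, R, Ui, R) / (n * p)
    return M, U, V, w

X = np.random.randn(100, 5, 3)
X[0] += 10.0                             # inject an outlier
M, U, V, w = matrix_t_em(X)
print(w[0], w[1:].mean())                # outlier weight is much smaller
```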
Face recognition technology has been widely adopted in many mission-critical scenarios, such as person identification, controlled admission, and mobile device access. Security surveillance is a typical application of face recognition technology. Because the low resolution of surveillance videos and images makes it hard for high-resolution face recognition algorithms to extract effective features, algorithms designed for high-resolution face recognition are difficult to migrate directly to low-resolution settings. As face recognition in security surveillance becomes more important in an era of dense urbanization, it is essential to develop algorithms that perform satisfactorily on video frames produced by low-resolution surveillance cameras. This paper elaborates the correlation-feature-based face recognition (CoFFaR) method for homogeneous low-resolution surveillance videos, covering its theory, experimental details, and experimental results. The experimental results validate the effectiveness of the correlation feature method, which improves the accuracy of homogeneous face recognition in surveillance security scenarios.
It is difficult to manually label ambiguous and complex-shaped targets accurately with binary masks, and the weakness of binary masks is especially prominent in medical image segmentation, where blurring is ubiquitous. With multiple annotators, it is even more challenging for clinicians to reach a consensus through binary masks. Moreover, these uncertain regions are related to lesion structure and may contain anatomical information beneficial to diagnosis. However, current research on uncertainty mainly focuses on the uncertainty of model training and data labeling; none of it investigates the influence of the ambiguous nature of the lesion itself. Inspired by image matting, we introduce the alpha matte as a soft mask to represent uncertain regions in medical scenes and accordingly propose a new uncertainty quantification method, filling the gap in uncertainty research on lesion structure. In this work, we introduce a new architecture within a multi-task framework to generate binary masks and alpha mattes, which outperforms all state-of-the-art matting algorithms. The proposed uncertainty map is able to highlight ambiguous regions, and our novel multi-task loss weighting strategy further improves performance and demonstrates concrete benefits. To fully evaluate the effectiveness of the proposed method, we first labeled three medical datasets with alpha mattes to address the shortage of matting datasets available in medical scenes, and we demonstrate that the alpha matte is a more effective labeling method than the binary mask, both qualitatively and quantitatively.
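A generic way to combine the two supervision signals is sketched below: binary cross-entropy on the mask and an L1 loss on the alpha matte, weighted by learnable homoscedastic-uncertainty terms (Kendall et al., 2018). This stands in for, and should not be confused with, the paper's own weighting strategy, which is not reproduced here.

```python
# A minimal sketch of a multi-task loss over a binary mask and an
# alpha matte, with learnable uncertainty-based task weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # log-variances acting as learnable task weights
        self.log_var_mask = nn.Parameter(torch.zeros(()))
        self.log_var_matte = nn.Parameter(torch.zeros(()))

    def forward(self, mask_logits, mask_gt, matte_pred, matte_gt):
        l_mask = F.binary_cross_entropy_with_logits(mask_logits, mask_gt)
        l_matte = F.l1_loss(matte_pred, matte_gt)
        return (torch.exp(-self.log_var_mask) * l_mask + self.log_var_mask
              + torch.exp(-self.log_var_matte) * l_matte + self.log_var_matte)

crit = MultiTaskLoss()
mask_logits = torch.randn(2, 1, 32, 32)
matte_pred = torch.rand(2, 1, 32, 32)
mask_gt = (torch.rand(2, 1, 32, 32) > 0.5).float()
matte_gt = torch.rand(2, 1, 32, 32)
loss = crit(mask_logits, mask_gt, matte_pred, matte_gt)
loss.backward()
```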
This paper develops a decentralized reinforcement learning (RL) scheme for multi-intersection adaptive traffic signal control (TSC), called CVLight, which leverages data collected from connected vehicles (CVs). The state and reward design facilitates coordination among agents and accounts for the travel delays collected by CVs. A novel algorithm, Asymmetric Advantage Actor-Critic (Asym-A2C), is proposed, in which both CV and non-CV information is used to train the critic network, while only CV information is used to execute the optimal signal timing. Comprehensive experiments show the superiority of CVLight over state-of-the-art algorithms on a 2-by-2 synthetic road network under various traffic demand patterns and penetration rates. The learned policy is then visualized to further demonstrate the advantages of Asym-A2C. A pre-training technique is adopted to improve the scalability of CVLight; it significantly shortens training time and yields superior performance on a 5-by-5 road network. A case study on a 2-by-2 road network in State College, Pennsylvania, USA, further demonstrates the effectiveness of the proposed algorithm under real-world conditions. Compared with other baseline models, the trained CVLight agent can efficiently control multiple intersections based solely on CV data and achieves the best performance, especially under low CV penetration rates.
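A minimal sketch of the asymmetric actor-critic idea follows: the critic is trained on the full state (CV plus non-CV information), while the actor that selects signal phases sees only the CV-observable part. Dimensions, network sizes, and the A2C losses below are generic assumptions, not the paper's exact design.

```python
# A minimal asymmetric advantage actor-critic sketch (assumptions).
import torch
import torch.nn as nn

cv_dim, full_dim, n_phases = 12, 20, 4   # assumed dimensions

actor = nn.Sequential(nn.Linear(cv_dim, 64), nn.ReLU(),
                      nn.Linear(64, n_phases))
critic = nn.Sequential(nn.Linear(full_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))

cv_obs = torch.randn(8, cv_dim)          # what the deployed agent sees
full_state = torch.randn(8, full_dim)    # available only during training
returns = torch.randn(8, 1)              # e.g. negative CV travel delay

value = critic(full_state)               # critic uses CV + non-CV info
advantage = (returns - value).detach()
dist = torch.distributions.Categorical(logits=actor(cv_obs))
action = dist.sample()                   # signal phase, from CV data only

policy_loss = -(dist.log_prob(action) * advantage.squeeze(-1)).mean()
value_loss = (returns - value).pow(2).mean()
(policy_loss + value_loss).backward()
```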
In this paper, we propose a novel technique, INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, and it captures program syntax via language semantics learned from a large code corpus with a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. INVALIDATOR then determines that an APR-generated patch overfits if (1) it violates correct specifications or (2) it maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is threefold. First, INVALIDATOR leverages both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; instead, it relies only on the current test suite and uses invariant inference to generalize program behaviors. Third, INVALIDATOR is fully automated. We have conducted experiments on a dataset of 885 patches generated on real-world programs in Defects4J. The results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of accuracy and F-measure, respectively.
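The two overfitting rules translate into a simple decision procedure, sketched below with hypothetical helper names (the sets of inferred invariants and the syntax classifier are stand-ins, not INVALIDATOR's real API).

```python
# A minimal sketch of the two-rule overfitting check with a
# syntax-based fallback (hypothetical names throughout).
def is_overfitting(patch_invs, correct_invs, error_invs, syntax_model):
    # Rule 1: the patch violates correct specifications
    if not correct_invs <= patch_invs:
        return True
    # Rule 2: the patch maintains erroneous behaviors of the buggy program
    if error_invs & patch_invs:
        return True
    # Fallback: classifier trained on labeled patches (program syntax)
    return syntax_model(patch_invs)

correct_invs = {"x >= 0", "ret == x + 1"}   # invariants of correct behavior
error_invs = {"ret == 0"}                   # invariants of buggy behavior
patch_invs = {"x >= 0", "ret == x + 1", "y != null"}
print(is_overfitting(patch_invs, correct_invs, error_invs,
                     syntax_model=lambda p: False))   # -> False
```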